Support Vector Machines Project

The Data

For this series of lectures, we will be using the famous Iris flower data set.

The Iris flower data set, or Fisher's Iris data set, is a multivariate data set introduced by Sir Ronald Fisher in 1936 as an example of discriminant analysis.

The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor), so 150 total samples. Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters.
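Before loading the data for the project, you can verify those counts directly. This is a minimal sketch (assuming scikit-learn is installed) that confirms the shape described above: 150 samples, 4 features, and 50 samples per species.

```python
# Sanity-check the Iris dataset's dimensions using scikit-learn's bundled copy.
import numpy as np
from sklearn.datasets import load_iris

iris_data = load_iris()
print(iris_data.data.shape)           # feature matrix: (150, 4)
print(list(iris_data.target_names))   # the three species
print(np.bincount(iris_data.target))  # samples per class: [50 50 50]
```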

Here's a picture of the three different Iris types:

In [17]:
# The Iris Setosa
from IPython.display import Image
url = 'http://upload.wikimedia.org/wikipedia/commons/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg'
Image(url,width=300, height=300)
Out[17]:
In [18]:
# The Iris Versicolor
from IPython.display import Image
url = 'http://upload.wikimedia.org/wikipedia/commons/4/41/Iris_versicolor_3.jpg'
Image(url,width=300, height=300)
Out[18]:
In [19]:
# The Iris Virginica
from IPython.display import Image
url = 'http://upload.wikimedia.org/wikipedia/commons/9/9f/Iris_virginica.jpg'
Image(url,width=300, height=300)
Out[19]:

The iris dataset contains measurements for 150 iris flowers from three different species.

The three classes in the Iris dataset:

Iris-setosa (n=50)
Iris-versicolor (n=50)
Iris-virginica (n=50)

The four features of the Iris dataset:

sepal length in cm
sepal width in cm
petal length in cm
petal width in cm

Get the data

Use seaborn to load the iris data: iris = sns.load_dataset('iris')

In [1]:
import seaborn as sns
iris=sns.load_dataset('iris')

Let's visualize the data and get you started!

Exploratory Data Analysis

Create a pairplot of the data set. Which flower species seems to be the most separable?

In [9]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
In [18]:
iris.head()
Out[18]:
sepal_length sepal_width petal_length petal_width species
0 5.1 3.5 1.4 0.2 setosa
1 4.9 3.0 1.4 0.2 setosa
2 4.7 3.2 1.3 0.2 setosa
3 4.6 3.1 1.5 0.2 setosa
4 5.0 3.6 1.4 0.2 setosa

In [7]:
sns.pairplot(iris, hue='species', palette='Dark2')
Out[7]:
<seaborn.axisgrid.PairGrid at 0x2d454f00780>
In [ ]:
# As we can see, setosa is the most separable species.
# Let's explore a KDE plot for setosa in more detail.

Create a kde plot of sepal_length versus sepal_width for the setosa species of flower.

In [14]:
setosa = iris[iris['species'] == 'setosa']
sns.kdeplot(x=setosa['sepal_width'], y=setosa['sepal_length'],
            cmap="plasma", fill=True, thresh=0.05)
Out[14]:
<matplotlib.axes._subplots.AxesSubplot at 0x2d456a1e208>

Train Test Split

Split your data into a training set and a testing set.

In [15]:
from sklearn.model_selection import train_test_split
In [25]:
X=iris.drop('species', axis=1)
y=iris['species']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=101)

Train a Model

Now it's time to train a Support Vector Machine classifier.

Call the SVC() model from sklearn and fit the model to the training data.

In [26]:
from sklearn.svm import SVC
In [30]:
svc_model=SVC()
In [31]:
svc_model.fit(X_train,y_train)
Out[31]:
SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
  decision_function_shape='ovr', degree=3, gamma='auto', kernel='rbf',
  max_iter=-1, probability=False, random_state=None, shrinking=True,
  tol=0.001, verbose=False)
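The output above shows SVC's default hyperparameters. For the RBF kernel, the two that usually matter most are C (regularization strength) and gamma (kernel width). As a quick illustration (the grid values below are arbitrary examples, not tuned settings), you can compare a few combinations by hand:

```python
# Compare test accuracy for a few illustrative (C, gamma) settings.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=101)

results = []
for C, gamma in [(0.1, 1.0), (1.0, 0.1), (10.0, 0.01)]:
    model = SVC(C=C, gamma=gamma).fit(X_train, y_train)
    results.append((C, gamma, model.score(X_test, y_test)))
    print(results[-1])
```

On a dataset this easy, most settings score well; on harder data, a proper search (e.g. with GridSearchCV) is the idiomatic way to choose these values.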

Model Evaluation

Now get predictions from the model and create a confusion matrix and a classification report.

In [33]:
prediction=svc_model.predict(X_test)
In [34]:
from sklearn.metrics import classification_report,confusion_matrix
In [35]:
print(confusion_matrix(y_test,prediction))
[[13  0  0]
 [ 0 20  0]
 [ 0  0 12]]
In [36]:
print(classification_report(y_test,prediction))
             precision    recall  f1-score   support

     setosa       1.00      1.00      1.00        13
 versicolor       1.00      1.00      1.00        20
  virginica       1.00      1.00      1.00        12

avg / total       1.00      1.00      1.00        45

Wow! The model classified every test sample correctly. Pretty good!
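A perfect score on a single 45-sample test split can partly be luck of the split. As a sanity check (a minimal sketch, not part of the original assignment), k-fold cross-validation over the full dataset gives a steadier accuracy estimate:

```python
# 5-fold cross-validation of a default SVC on the full Iris data.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
scores = cross_val_score(SVC(), X, y, cv=5)
print(scores)         # per-fold accuracy
print(scores.mean())  # mean accuracy across folds
```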